68 research outputs found

    AlSub: Fully Parallel and Modular Subdivision

    In recent years, mesh subdivision, the process of forging smooth free-form surfaces from coarse polygonal meshes, has become an indispensable production instrument. Although subdivision performance is crucial during simulation, animation, and rendering, state-of-the-art approaches still rely on serial implementations for complex parts of the subdivision process. They therefore often fail to harness the power of modern parallel devices, such as the graphics processing unit (GPU), for large parts of the algorithm and must resort to time-consuming serial preprocessing. In this paper, we show that a complete parallelization of the subdivision process for modern architectures is possible. Building on sparse matrix linear algebra, we show how to structure the complete subdivision process as a sequence of algebra operations. By restructuring and grouping these operations, we adapt the process to different use cases, such as regular subdivision of dynamic meshes, uniform subdivision for immutable topology, and feature-adaptive subdivision for efficient rendering of animated models. As the same machinery is used for all use cases, identical subdivision results are achieved in all parts of the production pipeline. As a second contribution, we show how these linear algebra formulations can be translated effectively into efficient GPU kernels. Applying our strategies to √3, Loop, and Catmull-Clark subdivision shows significant speedups over state-of-the-art solutions, while completely avoiding serial preprocessing.
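    The core idea of casting subdivision as sparse linear algebra can be illustrated with a toy example. The sketch below is a hypothetical 1-D polyline analogue, not the paper's √3, Loop, or Catmull-Clark operators: it builds a sparse subdivision operator S and refines vertex positions as a single sparse matrix-vector product.

```python
# Minimal sketch: subdivision expressed as a sparse matrix-vector product.
# Hypothetical 1-D analogue: refine an open polyline by inserting edge
# midpoints, encoded as one sparse operator S so that fine = S @ coarse.

def subdivide_matrix(n):
    """Sparse subdivision operator for an open polyline with n vertices.
    Each row is a list of (column, weight) pairs; even rows copy old
    vertices, odd rows average the two endpoints of each edge."""
    rows = []
    for i in range(n - 1):
        rows.append([(i, 1.0)])                 # keep vertex i
        rows.append([(i, 0.5), (i + 1, 0.5)])   # midpoint of edge (i, i+1)
    rows.append([(n - 1, 1.0)])                 # keep last vertex
    return rows

def apply_sparse(rows, verts):
    """Multiply the sparse operator by a list of vertex positions."""
    return [sum(w * verts[j] for j, w in row) for row in rows]

coarse = [0.0, 2.0, 6.0]
S = subdivide_matrix(len(coarse))
fine = apply_sparse(S, coarse)   # [0.0, 1.0, 2.0, 4.0, 6.0]
```

    Because each subdivision level is just another such product, the operators can be precomputed, fused, or regrouped per use case, which is the lever the abstract describes.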

    Analyzing the Internals of Neural Radiance Fields

    Modern Neural Radiance Fields (NeRFs) learn a mapping from position to volumetric density via proposal network samplers. In contrast to the coarse-to-fine sampling approach with two NeRFs, this offers significant potential for speedups using lower network capacity, as the task of mapping spatial coordinates to volumetric density involves no view-dependent effects and is thus much easier to learn. Given that most of the network capacity is used to estimate radiance, NeRFs could store valuable density information in their parameters or their deep features. To this end, we take a step back and analyze large, trained ReLU-MLPs used in coarse-to-fine sampling. We find that trained NeRFs, Mip-NeRFs, and proposal network samplers map samples with high density to local minima along a ray in activation feature space. We show how these large MLPs can be accelerated by transforming the intermediate activations into a weight estimate, without any modifications to the parameters post-optimization. With our approach, we can reduce the computational requirements of trained NeRFs by up to 50% with only a slight hit in rendering quality and no changes to the training protocol or architecture. We evaluate our approach on a variety of architectures and datasets, showing that our findings hold in various settings. Project page: nerfinternals.github.i
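    The minima-detection step can be sketched independently of any network. Assuming a precomputed profile of feature-space step sizes between consecutive ray samples (the values below are made up, not from the paper), candidate high-density samples are the strict local minima along the ray:

```python
# Minimal sketch: pick candidate high-density samples along a ray as strict
# local minima of the activation-space step size between consecutive samples.

def local_minima(profile):
    """Indices i where the profile has a strict local minimum along the ray."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]]

# hypothetical feature-space distances between consecutive ray samples
steps = [0.9, 0.7, 0.2, 0.6, 0.8, 0.3, 0.5]
dense = local_minima(steps)   # indices of candidate high-density samples
```

    In the paper's setting the profile comes from intermediate ReLU-MLP activations; here it is only a stand-in to show the selection rule.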

    Layered Fields for Natural Tessellations on Surfaces

    Mimicking natural tessellation patterns is a fascinating multi-disciplinary problem. Geometric methods that aim to reproduce such partitions on surface meshes are commonly based on the Voronoi model and its variants, and often face challenging issues such as metric estimation, geometric and topological complications, and, most critically, parallelization. In this paper, we introduce an alternative model which may be of value for resolving these issues. We drop the assumption that regions need to be separated by lines. Instead, we regard region boundaries as narrow bands and model the partition as a set of smooth functions layered over the surface. Given an initial set of seeds or regions, the partition emerges as the solution of a time-dependent set of partial differential equations describing concurrently evolving fronts on the surface. Our solution does not require geodesic estimation, elaborate numerical solvers, or complicated bookkeeping data structures. The cost per time-iteration is dominated by the multiplication and addition of two sparse matrices. Our approach extends easily to a Lloyd's-algorithm-style iteration, and the extraction of the dual mesh can be conveniently performed in parallel through matrix algebra. As our approach relies mainly on basic linear algebra kernels, it lends itself to efficient implementation on modern graphics hardware.
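    A minimal sketch of one time-iteration, assuming a reaction-diffusion-style update (the concrete equation and coefficients here are illustrative, not the paper's): each region is a scalar field over mesh vertices, advanced by a sparse graph-Laplacian product plus a pointwise reaction term, so the per-step cost is indeed dominated by sparse multiply-add.

```python
# Minimal sketch of one front-evolution time step on a mesh graph.
# The field f lives on vertices; diffusion spreads it, a logistic
# reaction term makes the front grow toward 1 inside the region.

def laplacian_apply(neighbors, f):
    """Sparse graph-Laplacian times f: neighbor sum minus degree * f."""
    return [sum(f[j] for j in nbrs) - len(nbrs) * f[i]
            for i, nbrs in enumerate(neighbors)]

def step(neighbors, f, dt=0.1, growth=1.0):
    """One explicit Euler step: diffusion plus logistic front growth."""
    Lf = laplacian_apply(neighbors, f)
    return [fi + dt * (lf + growth * fi * (1.0 - fi))
            for fi, lf in zip(f, Lf)]

# tiny path graph 0 - 1 - 2, seed region at vertex 0
neighbors = [[1], [0, 2], [1]]
f = [1.0, 0.0, 0.0]
for _ in range(20):
    f = step(neighbors, f)   # the front advances from vertex 0 outward
```

    With several competing fields, the region owning a vertex is simply the field with the largest value there, which is what makes the dual-mesh extraction a parallel reduction.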

    Detectability Conditions and State Estimation for Linear Time-Varying and Nonlinear Systems

    This work proposes a detectability condition for linear time-varying systems based on the exponential dichotomy spectrum. The condition guarantees the existence of an observer whose gain is determined only by the unstable modes of the system. This allows for an observer design with low computational complexity compared to classical estimation approaches. An extension of this observer design to a class of nonlinear systems is proposed, and local convergence of the corresponding estimation error dynamics is proven. Numerical results show the efficacy of the proposed observer design technique.
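    The flavor of such an observer can be sketched with a hypothetical 2-state discrete-time example (not the paper's exponential-dichotomy construction): the gain corrects only the measured unstable mode, while the stable mode converges on its own.

```python
# Minimal sketch: Luenberger-style observer whose gain acts only on the
# unstable mode. Hypothetical toy system, chosen so the error dynamics
# e+ = (A - L C) e have eigenvalues 0.3 and 0.5 (both stable).

def mat_vec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

A = [[1.5, 0.0], [0.0, 0.5]]   # one unstable (1.5) and one stable (0.5) mode
C = [1.0, 0.0]                 # only the unstable state is measured
L = [1.2, 0.0]                 # correction applied to the unstable mode only

x = [1.0, -1.0]                # true state
xh = [0.0, 0.0]                # observer estimate
for _ in range(30):
    y = sum(ci * xi for ci, xi in zip(C, x))     # measurement
    yh = sum(ci * xi for ci, xi in zip(C, xh))   # predicted measurement
    xh = [a + li * (y - yh) for a, li in zip(mat_vec(A, xh), L)]
    x = mat_vec(A, x)
```

    Even though the true state diverges along the unstable mode, the estimation error contracts at every step, which is the point of concentrating the gain on the unstable part.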

    Statistical analysis of discrete dislocation dynamics simulations: initial structures, cross-slip and microstructure evolution

    Over the past decades, discrete dislocation dynamics simulations have been shown to reliably predict the evolution of dislocation microstructures for micrometer-sized metallic samples. Such simulations provide insight into the governing deformation mechanisms and the interplay between different physical phenomena, such as dislocation reactions or cross-slip. This work focuses on a detailed analysis of the influence of cross-slip on the evolution of dislocation systems. A tailored data-mining strategy using the 'discrete-to-continuous' (D2C) framework makes it possible to quantify differences between dislocation structures and compare them quantitatively. We analyze the quantitative effects of cross-slip on the microstructure in the course of a tensile test and a subsequent relaxation to present the role of cross-slip in the microstructure evolution. The precision of the quantitative information extracted with D2C strongly depends on the resolution of the domain averaging. We also analyze how the resolution of the averaging influences the distribution of the total dislocation density and curvature fields of the specimen. Such analyses are important approaches for interpreting the resulting structures calculated by dislocation dynamics simulations.
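    The kind of domain averaging the resolution discussion refers to can be sketched in one dimension (a hypothetical helper, not the D2C implementation): dislocation segments are binned, and the line length falling into each bin, divided by the bin size, yields a density profile whose smoothness depends directly on the chosen resolution.

```python
# Minimal sketch: discrete-to-continuous domain averaging in 1-D.
# Each segment contributes the length of its overlap with a bin;
# dividing by the bin size gives a resolution-dependent density.

def density_field(segments, box, nbins):
    """segments: list of (start, end) intervals in [0, box];
    returns line length per bin divided by the bin size."""
    h = box / nbins
    rho = [0.0] * nbins
    for a, b in segments:
        lo, hi = min(a, b), max(a, b)
        for i in range(nbins):
            left, right = i * h, (i + 1) * h
            overlap = max(0.0, min(hi, right) - max(lo, left))
            rho[i] += overlap / h
    return rho

segments = [(0.0, 0.5), (0.25, 1.0)]
rho = density_field(segments, box=1.0, nbins=2)
```

    Refining nbins sharpens the profile but makes each bin's value noisier, which is the resolution trade-off the abstract analyzes.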

    MotionDeltaCNN: Sparse CNN Inference of Frame Differences in Moving Camera Videos

    Convolutional neural network inference on video input is computationally expensive and requires high memory bandwidth. Recently, DeltaCNN managed to reduce the cost by only processing pixels with significant updates over the previous frame. However, DeltaCNN relies on static camera input. Moving cameras add new challenges in how to fuse newly unveiled image regions with already processed regions efficiently to minimize the update rate, without increasing memory overhead and without knowing the camera extrinsics of future frames. In this work, we propose MotionDeltaCNN, a sparse CNN inference framework that supports moving cameras. We introduce spherical buffers and padded convolutions to enable seamless fusion of newly unveiled regions and previously processed regions without increasing the memory footprint. Our evaluation shows that we outperform DeltaCNN by up to 90% for moving camera videos.
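    The underlying delta idea can be sketched as a toy update mask (this is not the DeltaCNN or MotionDeltaCNN API): only pixels whose change since the previous frame exceeds a threshold are marked for reprocessing, and the fraction of marked pixels is what determines the savings.

```python
# Minimal sketch: mark only pixels with a significant frame-to-frame change.
# A sparse CNN backend would then restrict computation to the masked pixels.

def update_mask(prev, curr, threshold=0.1):
    """Boolean mask of pixels whose change exceeds the threshold."""
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[0.0, 0.5], [0.2, 0.9]]
curr = [[0.0, 0.8], [0.2, 0.9]]
mask = update_mask(prev, curr)             # only one pixel changed
updates = sum(sum(row) for row in mask)    # number of pixels to reprocess
```

    A moving camera breaks this picture because whole image regions appear with no previous-frame counterpart; the spherical buffers described above exist to give those regions something consistent to diff against.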